Artificial intelligence is now part of the conditions of teaching and learning. Some faculty will restrict student AI use; others will integrate it intentionally. Either approach can be rigorous and defensible when expectations are clear and assessments measure what students actually learn.

You do not need to be an AI expert to teach well in an AI-present environment. Start by setting expectations, then use assessment design strategies that make student thinking visible.


AI Course Policies and Syllabus Language

Clear options and copy-ready language to define how generative AI may be used in your course.

Course Policies

AI and Assessment

Guidance for designing assessments that still measure learning and support academic integrity in an AI-present environment.

Assessment

Request Teaching Support

Get individualized support with AI policies, assessment design, or instructional questions related to your course.

Support

Start Here

Most instructors take one of three practical approaches to generative AI. Any of these approaches can be effective when expectations are clear and reinforced at the assignment level.

Choose the option that best matches your course goals and comfort level. You can adjust over time.

Once you identify your starting point, you can take the practical next steps.

I need a syllabus policy and copy/paste language.
I need to redesign an assignment or assessment.
I am teaching asynchronously and need online-specific guidance.
I suspect unauthorized AI use and need a fair response process.
I want to use AI to support teaching tasks.

Using AI as a Teaching Partner

If you use generative AI in your teaching practice, treat it as support for planning and drafting, not as an authority. Verify content, check for bias, and do not enter sensitive information.

Diversity

Example prompt: Generate three diverse scenarios that illustrate ethos, pathos, and logos. For each scenario, include one discussion question that requires students to apply the concept.

Explain

Example prompt: Explain the concept of a “nudge” in behavioral economics for students who are new to the topic. Provide one everyday example and one discipline-specific example. End with three self-check questions.

Enrich

Example prompt: Present social learning theory in three formats: a short explanation, a worked example, and a common misconception with a correction.

Review

Example prompt: Review the assignment description below. Identify where students may misunderstand expectations and suggest clearer language. Recommend a rubric outline aligned to the learning outcomes. (Paste the assignment description after the prompt.)

Academic Integrity and the Reality of AI Detection Tools

The growth of generative AI presents real challenges for academic integrity. MSU Denver, however, has decided not to adopt an enterprise-level AI detection tool because of the well-documented methodological, ethical, and procedural limitations of these technologies.

Why We Move Beyond AI Detection Scores

Reliance on AI detection tools raises significant concerns that limit their usefulness in academic contexts:

  • Unverifiable probabilistic estimates: AI detection tools do not operate like plagiarism software, which identifies direct matches to known sources. Instead, they generate likelihood scores based on stylistic patterns associated with generative AI. These scores represent statistical inference rather than evidence. Because the underlying criteria are proprietary and not independently verifiable, there is no defensible way to confirm that a piece of student work was generated by AI rather than written in a conventional or formulaic style.
  • False positives and false negatives: AI detection systems are inherently unreliable. They produce false positives that risk incorrectly flagging student work, particularly when the actual rate of AI use is unknown (see the worked illustration after this list). At the same time, they frequently fail to identify AI-assisted work, since modest editing or paraphrasing is often enough to evade detection. This imbalance undermines both fairness and trust.
  • A false model of authorship: Most AI detection tools rely on a binary assumption that writing is either fully human or fully AI-generated. This assumption does not reflect how students actually work. In practice, many students use hybrid approaches that may include AI for brainstorming, outlining, or language refinement. Attempting to enforce academic integrity based on this false dichotomy is both conceptually flawed and difficult to apply consistently.
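
A simple base-rate calculation illustrates why a likelihood score cannot stand on its own. The figures here are illustrative assumptions, not the measured accuracy of any particular tool. Suppose a detector flags 90% of AI-generated submissions, incorrectly flags 5% of human-written ones, and 10% of submissions in a course actually involve unauthorized AI use. In 1,000 submissions, 100 are AI-generated (90 flagged) and 900 are human-written (45 flagged). A flagged submission is therefore human-written 45 out of 135 times, or about one in three, even under these favorable assumptions; the share of false accusations grows further as the true rate of AI use falls.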

A More Defensible and Pedagogically Sound Approach

Rather than relying on surveillance-based tools, MSU Denver emphasizes a proactive, human-centered approach grounded in clear expectations, transparency, and intentional assessment design. This approach prioritizes student learning, preserves trust, and provides faculty with more reliable ways to evaluate student understanding.

Guidance on AI and Assessment

Faculty are encouraged to consult the AI and Assessment resource page for practical guidance on setting clear expectations for AI use, designing assessments aligned with learning outcomes, and responding appropriately when questions arise about student work. The resource includes concrete examples, assessment strategies, and recommended practices that are pedagogically sound and institutionally defensible.


Institutional Note on Privacy and Security: Submitting student work to third-party AI detection platforms may introduce security and privacy risks. Many tools lack transparency about data storage, secondary use, or whether submitted content may be incorporated into commercial datasets. Faculty should exercise caution and prioritize approaches that protect student data and comply with university data and privacy standards.

Additional Resources